EKConv: Compressing Convolutional Neural Networks with Evolutionary Kernel Convolution
Authors
Abstract
Convolutional neural networks (CNNs) have achieved tremendous success in visual recognition tasks but mainly rely on massive numbers of learnable parameters. To solve this problem, many effective and efficient convolution operators have been proposed, such as group-wise convolution, point-wise convolution, and depth-wise convolution. However, the above operators only model and optimize the weight relationships within the same convolutional layer. To further reduce network parameters, we explicitly construct relationships between the kernels of adjacent layers. Specifically, we propose an evolutionary kernel convolution, namely EKConv, to generate kernel parameters efficiently. In particular, EKConv makes the kernel of the current layer inherit from the kernel of its preceding layer, which promotes information exchange between kernels. More importantly, EKConv is a novel plug-and-play module that can be easily embedded into mainstream CNNs. Extensive experimental results show that EKConv can compress CNNs by a large margin yet barely sacrifices image classification performance.
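Since only the abstract is available here, the following is a minimal, hypothetical PyTorch-style sketch of the idea it describes, not EKConv's actual implementation: an evolved layer stores a small learnable transform (here an assumed C x C channel-mixing matrix `mix`) instead of a full kernel, and derives its own kernel from the preceding layer's kernel. The module and method names (`EvolvedConv2d`, `evolve_weight`) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EvolvedConv2d(nn.Module):
    """Hypothetical sketch of an 'evolutionary' convolution layer (an assumption,
    not the paper's code). It assumes the preceding and current kernels share the
    same shape (C, C, k, k); instead of storing a full kernel, the layer stores
    only a C x C mixing matrix and derives its kernel from the parent kernel.
    """

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.kernel_size = kernel_size
        # Lightweight "evolution" parameters: C x C instead of C x C x k x k.
        self.mix = nn.Parameter(torch.eye(channels) + 0.01 * torch.randn(channels, channels))
        self.bias = nn.Parameter(torch.zeros(channels))

    def evolve_weight(self, parent_weight: torch.Tensor) -> torch.Tensor:
        # Build this layer's kernel by linearly mixing the parent layer's filters.
        return torch.einsum('oc,cikl->oikl', self.mix, parent_weight)

    def forward(self, x: torch.Tensor, parent_weight: torch.Tensor) -> torch.Tensor:
        weight = self.evolve_weight(parent_weight)
        return F.conv2d(x, weight, self.bias, padding=self.kernel_size // 2)


# Usage: only the stem and base layers store full kernels; later layers
# inherit and evolve the base kernel, layer by layer.
stem = nn.Conv2d(3, 16, kernel_size=3, padding=1)
base = nn.Conv2d(16, 16, kernel_size=3, padding=1)
layer2 = EvolvedConv2d(channels=16)
layer3 = EvolvedConv2d(channels=16)

x = torch.randn(1, 3, 32, 32)
h = F.relu(base(F.relu(stem(x))))
h = F.relu(layer2(h, base.weight))                          # kernel evolved from the base layer
h = F.relu(layer3(h, layer2.evolve_weight(base.weight)))    # evolved again from layer2's kernel
print(h.shape)  # torch.Size([1, 16, 32, 32])
```

Under these assumptions, each evolved layer stores C² mixing weights plus C biases in place of C²k² kernel weights, which is where a parameter reduction of roughly k² per layer would come from.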
Similar sources
Compressing Convolutional Neural Networks
Convolutional neural networks (CNN) are increasingly used in many areas of computer vision. They are particularly attractive because of their ability to “absorb” great quantities of labeled data through millions of parameters. However, as model sizes increase, so do the storage and memory requirements of the classifiers. We present a novel network architecture, Frequency-Sensitive Hashed Nets (...
Kernel Graph Convolutional Neural Networks
Graph kernels have been successfully applied to many graph classification problems. Typically, a kernel is first designed, and then an SVM classifier is trained based on the features defined implicitly by this kernel. This two-stage approach decouples data representation from learning, which is suboptimal. On the other hand, Convolutional Neural Networks (CNNs) have the capability to learn thei...
Compressing deep convolutional neural networks in visual emotion recognition
In this paper, we consider the problem of insufficient runtime and memory-space complexities of deep convolutional neural networks for visual emotion recognition. A survey of recent compression methods and efficient neural networks architectures is provided. We experimentally compare the computational speed and memory consumption during the training and the inference stages of such methods as t...
Generalizing the Convolution Operator in Convolutional Neural Networks
Convolutional neural networks have become a main tool for solving many machine vision and machine learning problems. A major element of these networks is the convolution operator which essentially computes the inner product between a weight vector and the vectorized image patches extracted by sliding a window in the image planes of the previous layer. In this paper, we propose two classes of su...
Compressing Deep Convolutional Networks using Vector Quantization
Deep convolutional neural networks (CNN) have become the most promising method for object recognition, repeatedly demonstrating record-breaking results for image classification and object detection in recent years. However, a very deep CNN generally involves many layers with millions of parameters, making the storage of the network model extremely large. This prohibits the usage of deep CN...
Journal
Journal title: Journal of Physics
Year: 2023
ISSN: ['0022-3700', '1747-3721', '0368-3508', '1747-3713']
DOI: https://doi.org/10.1088/1742-6596/2425/1/012011